

Search for: All records

Creators/Authors contains: "Markopoulos, Panos"


  1. The ability to rapidly understand and label the radio spectrum in an autonomous way is key for monitoring spectrum interference, spectrum utilization efficiency, protecting passive users, monitoring and enforcing compliance with regulations, detecting faulty radios, dynamic spectrum access, opportunistic mesh networking, and numerous NextG regulatory and defense applications. We consider the problem of automatic modulation classification (AMC) by a distributed network of wireless sensors that monitor the spectrum for signal transmissions of interest over a large deployment area. Each sensor receives signals under a specific channel condition depending on its location and accordingly trains an individual deep neural network (DNN) model to classify signals. To improve modulation classification accuracy, we consider federated learning (FL), where each sensor shares its trained model with a centralized controller, which, after aggregation, initializes its model for the next round of training. This process is repeated over time without exchanging any spectrum data (as would be done in cooperative spectrum sensing). A common DNN is built across the network while preserving the privacy associated with signals collected at different locations. Given their distributed nature, the statistics of the data across these sensors are likely to differ significantly. We propose the use of adaptive federated learning for AMC. Specifically, we use FedAdam (an algorithm that uses Adam for server-side optimization) and examine how it compares to FedAvg (a standard FL algorithm that averages client parameters after some local iterations), particularly in challenging scenarios that include class imbalance and/or noise-level imbalance across the network. Our extensive numerical studies over 11 standard modulation classes corroborate the merit of adaptive FL, which outperforms its standard alternatives in various challenging cases and for various network sizes.
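     The server-side contrast between the two aggregation rules described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the learning rate, moment decay rates, and adaptivity constant are assumed placeholder values, and model weights are flattened into NumPy vectors for simplicity.

     ```python
     import numpy as np

     def fedavg_update(global_w, client_ws):
         # FedAvg: the new global model is simply the average of client models.
         return np.mean(client_ws, axis=0)

     class FedAdamServer:
         # FedAdam: treat the averaged client update as a pseudo-gradient and
         # apply an Adam-style step on the server (hyperparameters are assumed).
         def __init__(self, dim, lr=0.1, beta1=0.9, beta2=0.99, tau=1e-3):
             self.m = np.zeros(dim)  # first moment estimate
             self.v = np.zeros(dim)  # second moment estimate
             self.lr, self.b1, self.b2, self.tau = lr, beta1, beta2, tau

         def update(self, global_w, client_ws):
             delta = np.mean(client_ws, axis=0) - global_w  # pseudo-gradient
             self.m = self.b1 * self.m + (1 - self.b1) * delta
             self.v = self.b2 * self.v + (1 - self.b2) * delta ** 2
             return global_w + self.lr * self.m / (np.sqrt(self.v) + self.tau)
     ```

     The per-coordinate scaling by the second moment is what lets the adaptive rule cope with heterogeneous client statistics (e.g., class or noise-level imbalance), where plain averaging can stall.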
  2. Incremental task learning (ITL) is a category of continual learning that seeks to train a single network for multiple tasks (one after another), where training data for each task is only available during the training of that task. Neural networks tend to forget older tasks when they are trained for newer tasks; this property is often known as catastrophic forgetting. To address this issue, ITL methods use episodic memory, parameter regularization, masking and pruning, or extensible network structures. In this paper, we propose a new incremental task learning framework based on low-rank factorization. In particular, we represent the network weights for each layer as a linear combination of several rank-1 matrices. To update the network for a new task, we learn a rank-1 (or low-rank) matrix and add it to the weights of every layer. We also introduce an additional selector vector that assigns different weights to the low-rank matrices learned for the previous tasks. We show that our approach performs better than the current state-of-the-art methods in terms of accuracy and forgetting. Our method also offers better memory efficiency than episodic memory- and mask-based approaches. Our code will be available at https://github.com/CSIPlab/task-increment-rank-update.git
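     The weight parameterization described above — each layer's weight matrix as a selector-weighted sum of per-task rank-1 factors — can be sketched as follows. The dimensions, factor values, and selector weights are illustrative assumptions, not values from the paper.

     ```python
     import numpy as np

     rng = np.random.default_rng(0)
     d_out, d_in, n_tasks = 4, 3, 2

     # One rank-1 factor pair (u_k, v_k) learned per task; frozen after training.
     factors = [(rng.standard_normal(d_out), rng.standard_normal(d_in))
                for _ in range(n_tasks)]

     # Selector vector per task: weights over the factors learned so far
     # (values here are hypothetical; in practice they are learned).
     selectors = {0: np.array([1.0]), 1: np.array([0.5, 1.0])}

     def layer_weight(task_id):
         # W_t = sum_k alpha_t[k] * u_k v_k^T over the first t+1 factors.
         alpha = selectors[task_id]
         return sum(a * np.outer(u, v)
                    for a, (u, v) in zip(alpha, factors[: len(alpha)]))
     ```

     Storing only the new rank-1 pair and a short selector vector per task is what gives the method its memory advantage over replaying stored examples or keeping per-task binary masks.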